Conversational (History) (Generative Models)

Synopsis

Applies a Conversational model to a given conversation history.

Description

Applies a Conversational model to a given conversation history. The conversation must be delivered as a data table with at least two columns: one for the role of each message and one for its content. Roles are typically 'system', 'user', or 'assistant'. These models can provide answers to conversational input. For example, a model could answer the pair (role: 'user', content: 'Hi, how are you?') with the answer (role: 'assistant', content: 'Thanks, I am good. How about yourself?'). Unlike all other task operators, this operator creates a new data table as output containing the answer of the model instead of adding an additional column or row. If the result is a single row, it can of course later be appended to the original input data.
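The table-based history described above corresponds closely to the list-of-message-dictionaries format used by many chat model APIs. The following minimal sketch illustrates that mapping; the rows and column order are invented for illustration and are not produced by the operator itself:

```python
# Sketch: turning role/content conversation table rows into the list of
# message dicts commonly used by chat model APIs. The tuples below stand
# in for data table rows, using the example conversation from the text.
rows = [
    ("system", "You are a helpful assistant."),
    ("user", "Hi, how are you?"),
]

# One dict per table row, preserving the conversation order.
messages = [{"role": role, "content": content} for role, content in rows]

# A conversational model would answer with a single new message, e.g.:
# {"role": "assistant", "content": "Thanks, I am good. How about yourself?"}
```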

Input

  • data (Data table)

    The data containing the conversation so far. Each row represents a message in the conversation. The table needs at least two columns, one for the role and one for the content.

  • model (File)

    The optional model directory (in your project / repository or your file system). Must be provided if the parameter "use local model" is true. Typically, this is only necessary if you want to use your own fine-tuned local version of a model.

Output

  • data (Data table)

    The result data with the answer of the model.

  • model (File)

    The model directory that was delivered as input.

Parameters

  • use local model Indicates whether a local model should be used based on a local file directory or whether a model should be used from the Huggingface portal. If a local model is to be used, all task operators require a file object referencing the model directory as a second input. If this parameter is unchecked, you will need to specify the full model name from the Huggingface portal for the “model” parameter.
  • model The model from the Huggingface portal which will be used by the operator. Only used when the “use local model” parameter is unchecked. The model name needs to be the full model name as found on each model card on the Huggingface portal. Please be aware that using large models can result in downloads of many gigabytes of data and that models will be stored in a local cache.
  • role The name of the role column, which typically contains values such as 'system', 'user', or 'assistant'.
  • content The name of the content column which contains the actual messages in the previous conversation for each of the roles.
  • device Where the inference should take place. Either on a GPU, a CPU, or Apple’s MPS architecture. If set to Automatic, the inference will prefer the GPU if available and will fall back to the CPU otherwise.
  • device indices If you have multiple GPUs and computation is set up to happen on GPUs, you can specify which ones are used with this parameter. Counting of devices starts with 0. The default of “0” means that the first GPU device in the system will be used, a value of “1” would refer to the second, and so on. You can utilize multiple GPUs by providing a comma-separated list of device indices. For example, you could use “0,1,2,3” on a machine with four GPUs if all four should be utilized. Please note that RapidMiner performs data-parallel computation, which means that the model needs to be small enough to be completely loaded on each of your GPUs.
  • data type Specifies the data type under which the model should be loaded. Using lower precisions can reduce memory usage while leading to slightly less accurate results in some cases. If set to “auto” the data precision is derived from the model itself.
  • revision The specific model version to use. The default is “main”. The value can be a branch name, a tag name, or a commit id of the model in the Huggingface git repository.
  • trust remote code Whether or not to allow custom code defined on the Hub in the model's own modeling, configuration, tokenization, or even pipeline files.
  • conda environment The conda environment used for this model task. Additional packages may be installed into this environment; please refer to the extension documentation for additional details on this and on version requirements.
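To make the “device indices” format above concrete, here is a small sketch of how such a comma-separated value could be parsed into a list of GPU indices. The function name is hypothetical and not part of the extension:

```python
# Hypothetical helper illustrating the "device indices" parameter format:
# a comma-separated list of zero-based GPU indices, e.g. "0" or "0,1,2,3".
def parse_device_indices(value: str) -> list[int]:
    return [int(part.strip()) for part in value.split(",")]

# The default "0" selects only the first GPU; "0,1,2,3" selects four GPUs
# for data-parallel computation, where the full model must fit on each GPU.
```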

Tutorial Processes

Using a conversational model on a conversation history

This tutorial shows how to use a conversational model on a conversation history. It creates a small conversation history and feeds it into the task operator. You can also deliver a local model using the second operator input or specify a different model from Huggingface using the model parameter.
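The description notes that a single-row result can later be appended to the original input data to extend the history for the next turn. The following sketch illustrates that idea with plain tuples standing in for data table rows; the answer text is invented for illustration:

```python
# Sketch: appending the operator's single-row answer back onto the
# conversation history so it can be fed into the next conversational turn.
history = [
    ("user", "Hi, how are you?"),
]

# Stand-in for the one-row output table produced by the operator.
answer_row = ("assistant", "Thanks, I am good. How about yourself?")

history.append(answer_row)  # the extended history becomes the next input
```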